Search for: All records where Creators/Authors contains: "Jambulapati, A"

  1. Placing a dataset $A = \{a_i\}_{i \in [n]} \subset \mathbb{R}^d$ in radial isotropic position, i.e., finding an invertible $R \in \mathbb{R}^{d \times d}$ such that the unit vectors $\{(Ra_i)\|Ra_i\|_2^{-1}\}_{i \in [n]}$ are in isotropic position, is a powerful tool with applications in functional analysis, communication complexity, coding theory, and the design of learning algorithms. When the transformed dataset has a second moment matrix within an $\exp(\pm\epsilon)$ factor of a multiple of $I_d$, we call $R$ an $\epsilon$-approximate Forster transform. We give a faster algorithm for computing approximate Forster transforms, based on optimizing an objective defined by Barthe [Barthe98]. When the transform has a polynomially-bounded aspect ratio, our algorithm uses $O(nd^{\omega-1} \cdot (n/\epsilon)^{o(1)})$ time to output an $\epsilon$-approximate Forster transform with high probability, when one exists. This is almost the natural limit of this approach, as even evaluating Barthe's objective takes $O(nd^{\omega-1})$ time. Previously, the state-of-the-art runtime in this regime was based on cutting-plane methods, and scaled at least as $\approx n^3 + n^2 d^{\omega-1}$. We also provide explicit estimates on the aspect ratio in the smoothed analysis setting, and show that our algorithm similarly improves upon those in the literature. To obtain our results, we develop a subroutine of potential broader interest: a reduction from almost-linear time sparsification of graph Laplacians to the ability to support almost-linear time matrix-vector products. We combine this tool with new stability bounds on Barthe's objective to implicitly implement a box-constrained Newton's method [CMTV17, ALOW17].
     A toy version of the underlying radial-isotropic-position iteration is sketched after this list.
    Free, publicly-accessible full text available April 8, 2026
  2. In the recent literature on machine learning and decision making, calibration has emerged as a desirable and widely studied statistical property of the outputs of binary prediction models. However, the algorithmic aspects of measuring model calibration have remained relatively less well-explored. Motivated by [BGHN23], which proposed a rigorous framework for measuring distances to calibration, we initiate the algorithmic study of calibration through the lens of property testing. We define the problem of calibration testing from samples: given $n$ draws from a distribution $\mathcal{D}$ over (prediction, binary outcome) pairs, the goal is to distinguish between the case where $\mathcal{D}$ is perfectly calibrated and the case where $\mathcal{D}$ is $\varepsilon$-far from calibration. We make the simple observation that the empirical smooth calibration linear program can be reformulated as an instance of minimum-cost flow on a highly structured graph, and design an exact dynamic programming-based solver for it which runs in time $O(n \log^2 n)$ and solves the calibration testing problem information-theoretically optimally in the same time. This improves upon state-of-the-art black-box linear program solvers requiring $\Omega(n^\omega)$ time, where $\omega > 2$ is the exponent of matrix multiplication. We also develop algorithms for tolerant variants of our testing problem that improve upon black-box linear program solvers, and give sample complexity lower bounds for alternative calibration measures to the one considered in this work. Finally, we present experiments showing that the testing problem we define faithfully captures standard notions of calibration, and that our algorithms scale efficiently to accommodate large sample sizes.
     The empirical smooth calibration linear program is sketched, via a generic solver, after this list.
  3. The k-principal component analysis (k-PCA) problem is a fundamental algorithmic primitive that is widely used in data analysis and dimensionality reduction applications. In statistical settings, the goal of k-PCA is to identify a top eigenspace of the covariance matrix of a distribution, which we only have black-box access to via samples. Motivated by these settings, we analyze black-box deflation methods as a framework for designing k-PCA algorithms, where we model access to the unknown target matrix via a black-box 1-PCA oracle which returns an approximate top eigenvector, under two popular notions of approximation. Despite being arguably the most natural reduction-based approach to k-PCA algorithm design, such black-box methods, which recursively call a 1-PCA oracle k times, were previously poorly understood. Our main contribution is significantly sharper bounds on the approximation parameter degradation of deflation methods for k-PCA. For a quadratic form notion of approximation we term ePCA (energy PCA), we show deflation methods suffer no parameter loss. For an alternative well-studied approximation notion we term cPCA (correlation PCA), we tightly characterize the parameter regimes where deflation methods are feasible. Moreover, we show that in all feasible regimes, k-cPCA deflation algorithms suffer no asymptotic parameter loss for any constant k. We apply our framework to obtain state-of-the-art k-PCA algorithms robust to dataset contamination, improving prior work in sample complexity by a $\mathrm{poly}(k)$ factor.
     A toy deflation wrapper built on a power-iteration 1-PCA oracle is sketched after this list.
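
For the first entry above, the following is a minimal, illustrative sketch of the classical fixed-point iteration for radial isotropic position (normalize the vectors, measure their second moment matrix, rescale by its inverse square root, and repeat). It is not the paper's box-constrained Newton method; the function name, tolerances, and iteration cap are hypothetical, and convergence here is only heuristic.

```python
import numpy as np

def approx_forster_transform(A, eps=1e-2, max_iters=10_000):
    """Toy fixed-point iteration for an approximate Forster transform.

    A: (n, d) array whose rows a_i are assumed to span R^d.
    Returns R such that the unit vectors R a_i / ||R a_i||_2 have second
    moment matrix within exp(+-eps) of (1/d) * I_d.
    """
    n, d = A.shape
    R = np.eye(d)
    for _ in range(max_iters):
        X = A @ R.T                                        # rows are R a_i
        U = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalized rows
        M = (U.T @ U) / n                                  # second moment, trace 1
        evals = np.linalg.eigvalsh(d * M)
        if np.all(np.abs(np.log(evals)) <= eps):           # within exp(+-eps) of I_d
            return R
        # Rescale by (d*M)^{-1/2}; the scalar factor is irrelevant after normalization.
        w, V = np.linalg.eigh(M)
        R = (V * (1.0 / np.sqrt(d * w))) @ V.T @ R
    raise RuntimeError("no convergence; an eps-approximate transform may not exist")

# Example: Gaussian points in R^3 (in general position, so a transform exists).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 3))
R = approx_forster_transform(A)
U = A @ R.T
U /= np.linalg.norm(U, axis=1, keepdims=True)
print(np.round(3 * (U.T @ U) / len(U), 3))                 # approximately I_3
```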
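
For the second entry above, the sketch below spells out the empirical smooth calibration linear program that the paper's $O(n \log^2 n)$ algorithm solves, but hands it to a generic LP solver (scipy.optimize.linprog) instead of the paper's minimum-cost-flow dynamic program; it is meant only to make the tested quantity concrete. The function name is hypothetical; the LP maximizes $\frac{1}{n}\sum_i f(v_i)(y_i - v_i)$ over $[-1, 1]$-bounded, 1-Lipschitz $f$, with Lipschitz constraints enforced only between consecutive sorted predictions.

```python
import numpy as np
from scipy.optimize import linprog

def smooth_calibration_error(v, y):
    """Empirical smooth calibration of predictions v in [0, 1] vs. outcomes y in {0, 1}.

    Solves  max_f (1/n) * sum_i f(v_i) * (y_i - v_i)  over functions
    f: [0, 1] -> [-1, 1] that are 1-Lipschitz, encoded as an LP with one
    variable per sorted sample and Lipschitz constraints between neighbors.
    """
    v, y = np.asarray(v, float), np.asarray(y, float)
    order = np.argsort(v)
    v, y = v[order], y[order]
    n = len(v)

    c = -(y - v) / n                       # linprog minimizes, so negate the objective

    # |f_{i+1} - f_i| <= v_{i+1} - v_i for consecutive sorted predictions.
    gaps = np.diff(v)
    D = np.zeros((n - 1, n))
    D[np.arange(n - 1), np.arange(n - 1)] = -1.0
    D[np.arange(n - 1), np.arange(1, n)] = 1.0
    A_ub = np.vstack([D, -D])
    b_ub = np.concatenate([gaps, gaps])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1.0, 1.0)] * n, method="highs")
    assert res.success
    return -res.fun                        # value of the original maximization

# Example: perfectly calibrated samples (y ~ Bernoulli(v)) should score near 0.
rng = np.random.default_rng(0)
v = rng.uniform(size=500)
y = (rng.uniform(size=500) < v).astype(float)
print(smooth_calibration_error(v, y))      # small, up to sampling noise
```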
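
For the third entry above, a toy sketch of the black-box deflation framework: call a 1-PCA oracle k times, projecting out each recovered direction before the next call. The 1-PCA oracle here is plain power iteration, not the robust oracles analyzed in the paper; names and iteration counts are illustrative.

```python
import numpy as np

def one_pca_power(M, iters=200, seed=0):
    """Toy 1-PCA oracle: power iteration returning an approximate top eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v

def k_pca_deflation(M, k, one_pca=one_pca_power):
    """Black-box deflation: call a 1-PCA oracle k times, projecting out each
    recovered direction before the next call."""
    d = M.shape[0]
    M = M.copy()
    V = np.zeros((d, k))
    for j in range(k):
        v = one_pca(M)
        v -= V[:, :j] @ (V[:, :j].T @ v)   # re-orthogonalize (numerical hygiene)
        v /= np.linalg.norm(v)
        V[:, j] = v
        P = np.eye(d) - np.outer(v, v)     # deflate: M <- (I - v v^T) M (I - v v^T)
        M = P @ M @ P
    return V

# Example: recover an approximate top-3 eigenspace of an empirical covariance.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
cov = X.T @ X / 500
V = k_pca_deflation(cov, k=3)
print(np.round(V.T @ cov @ V, 3))          # roughly diag of the top 3 eigenvalues
```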